Stat 260/CS 294: Randomized Algorithms for Matrices and Data, Lecture 5 - 09/18/2013
Lecture 5: Matrix Multiplication, Cont.; and Random Projections

Author

  • Michael Mahoney
Abstract

Here, we will provide a spectral norm bound for the error of the approximation constructed by the BasicMatrixMultiplication algorithm. Recall that, given as input an m × n matrix A and an n × p matrix B, this algorithm randomly samples c columns of A and the corresponding rows of B to construct an m × c matrix C and a c × p matrix R such that CR ≈ AB, in the sense that some matrix norm ||AB − CR|| is small. The Frobenius norm bound we established before immediately implies a bound for the spectral norm, but in some cases we will need a better bound than can be obtained in this manner. Since, in this semester, we will only need a spectral norm bound for the special case that B = A^T, that is all we will consider here.
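As a reminder of how the algorithm operates, here is a minimal numpy sketch of the sampling step (an illustrative implementation, not the lecture's reference code; the nearly-optimal probabilities p_i ∝ ||A[:, i]|| · ||B[i, :]|| come from the earlier Frobenius-norm analysis):

```python
import numpy as np

def basic_matrix_multiplication(A, B, c, rng=None):
    # Illustrative sketch of BasicMatrixMultiplication: sample c
    # column/row pairs and rescale so that E[C @ R] = A @ B.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    # Nearly-optimal sampling probabilities from the Frobenius-norm
    # analysis: p_i proportional to ||A[:, i]|| * ||B[i, :]||.
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)
    scale = 1.0 / np.sqrt(c * p[idx])   # rescaling keeps CR unbiased
    C = A[:, idx] * scale               # m x c
    R = B[idx, :] * scale[:, None]      # c x p
    return C, R
```

With this rescaling, C @ R is an unbiased estimator of A @ B, and the quality of the approximation is controlled by the number of sampled pairs c.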


Similar resources

Randomized Algorithms for Matrices and Data, Lecture 1 - 09/04/2013. Lecture 1: Introduction and Overview

This course will cover recent developments in randomized matrix algorithms of interest in large-scale machine learning and statistical data analysis applications. By this, we will mean basic algorithms for fundamental matrix problems—such as matrix multiplication, least-squares regression, low-rank matrix approximation, and so on—that use randomization in some nontrivial way. This area goes by t...


CS 294: Randomized Algorithms for Matrices and Data, Lecture 6 - 09/23/2013. Lecture 6: Sampling/Projections for Least-squares Approximation

In many applications, we want to find an approximate solution to a problem or set of equations that, for noise or other reasons, does not have a solution or, not unrelatedly, does not have a unique solution. A canonical example of this is given by the very overconstrained (i.e., overdetermined) least-squares (LS) problem, and this will be our focus for the next several classes. ...
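To make the overconstrained setup concrete, here is a toy instance in numpy (illustrative only; the random data and solver call are not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))   # 100 equations, 5 unknowns
b = rng.standard_normal(100)        # generic b: no x solves Ax = b exactly

# The LS solution minimizes ||Ax - b||_2 over all x.
x_opt, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x_opt, residual)
```

Because the system has far more equations than unknowns, the residual is nonzero, and the LS solution is the natural notion of "best" approximate solution.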


Lecture 9: Fast Random Projections and FJLT, cont.

Warning: these notes are still very rough. They provide more details on what we discussed in class, but there may still be some errors, incomplete/imprecise statements, etc. in them. We continue with the discussion from last time. There is no new reading, just the same as last class. Today, we will do the following. • Show that the two structural conditions required for good LS approximation ar...
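For orientation, one standard fast random projection of this type is the subsampled randomized Hadamard transform; the following is a rough numpy/scipy sketch, assuming the number of rows is a power of 2 (the dense Hadamard matrix is for clarity only, whereas an actual FJLT applies it implicitly in O(n log n) time):

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, c, rng=None):
    # Subsampled randomized Hadamard transform applied to the rows of A.
    # Assumes n = A.shape[0] is a power of 2.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    d = rng.choice([-1.0, 1.0], size=n)        # random sign flips (D)
    H = hadamard(n) / np.sqrt(n)               # orthonormal Hadamard (H)
    mixed = H @ (d[:, None] * A)               # H D A: spreads out row mass
    idx = rng.choice(n, size=c, replace=True)  # uniform row sampling (S)
    return np.sqrt(n / c) * mixed[idx, :]      # rescaled c x d sketch
```

The sign flips and Hadamard mixing flatten the leverage structure of the rows, which is what makes the subsequent uniform sampling safe.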


Sketching Algorithms for Big Data, Fall 2017, Lecture 11 - October 5, 2017

2 Matrix Multiplication: Suppose we have two matrices A ∈ R^{n×d} and B ∈ R^{n×p}, with rows a_i^T and b_i^T respectively, where a_i ∈ R^d and b_i ∈ R^p. We want to compute A^T B. The naive approach requires O(ndp) operations (three nested for loops). There are some faster algorithms. For square matrix multiplication, the following algorithms achieve complexity O(n^ω): 1. ω < log2 ...
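Concretely, the row-wise view of this computation expresses A^T B as a sum of n outer products, which is the O(ndp) baseline the notes refer to (a small numpy illustration, not from the notes themselves):

```python
import numpy as np

def naive_atb(A, B):
    # A^T B as a sum of n outer products a_i b_i^T over the rows of
    # A and B; this is the O(ndp) computation.
    n, d = A.shape
    p = B.shape[1]
    out = np.zeros((d, p))
    for i in range(n):
        out += np.outer(A[i], B[i])
    return out

A = np.random.default_rng(1).standard_normal((50, 3))
B = np.random.default_rng(2).standard_normal((50, 4))
assert np.allclose(naive_atb(A, B), A.T @ B)
```

This outer-product decomposition is also what makes sampling-based approximations natural: keeping a random subset of the n terms gives an unbiased estimate of the product.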


Lecture 10: Fast Random Projections and FJLT, cont.

• We will describe a fast algorithm to compute very fine approximations to the leverage scores of an arbitrary tall matrix. • We will describe a few subtleties to extend this basic algorithm to non-tall matrices, which is of interest in extending these LS ideas to low-rank matrix approximation. • We will describe how to use this algorithm in a fast random sampling algorithm for the LS problem (...
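For reference, the exact leverage scores that the fast algorithm approximates are the squared row norms of an orthonormal basis for the column span; a short numpy sketch (an assumed implementation, where the O(nd^2) QR step is precisely the cost the fast algorithm avoids):

```python
import numpy as np

def leverage_scores(A):
    # Exact leverage scores of a tall n x d matrix A: the squared row
    # norms of an orthonormal basis Q for range(A). The O(n d^2) QR
    # factorization here is the bottleneck that fast approximation
    # algorithms are designed to beat.
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)
```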


